Recent developments in the field of explainable artificial intelligence (XAI) have helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens the door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) could expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective backdoor triggers against malware classifiers. To address this trade-off, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and from that we establish a defense, called XRand, against such attacks. We show that our mechanism restricts the information that the adversary can learn about the top important features, while maintaining the faithfulness of the explanations.
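As a rough illustration of how an LDP guarantee can be attached to a feature-based explanation, the sketch below applies standard per-bit randomized response to the top-k membership vector of a feature ranking. It is a generic primitive under assumed inputs, not the actual XRand mechanism.

```python
import numpy as np

def ldp_top_k_indicator(importances, k, epsilon, rng=None):
    """Release a noisy indicator of which features are top-k important.

    Generic randomized-response sketch (not the XRand mechanism): each bit of
    the top-k membership vector is kept with probability e^eps / (e^eps + 1)
    and flipped otherwise, which gives per-bit epsilon-LDP.
    """
    rng = rng or np.random.default_rng()
    top_k = np.argsort(importances)[-k:]
    indicator = np.zeros(len(importances), dtype=bool)
    indicator[top_k] = True
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flips = rng.random(len(indicator)) >= p_keep
    return np.where(flips, ~indicator, indicator)
```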
Temporal Graph Neural Networks (TGNNs) have been receiving a lot of attention recently due to their capability in modeling time-evolving graph-related tasks. Similar to Graph Neural Networks, it is also non-trivial to interpret predictions made by a TGNN due to its black-box nature. A major approach to tackling this problem in GNNs is to analyze the model's responses to perturbations of its inputs, known as perturbation-based explanation methods. While these methods are convenient and flexible since they do not need internal access to the model, does this lack of internal access prevent them from revealing some important information about the predictions? Motivated by that question, this work studies the limits of some classes of perturbation-based explanation methods. In particular, by constructing specific instances of TGNNs, we show that (i) node-perturbation cannot reliably identify the paths carrying out the prediction, (ii) edge-perturbation is not reliable in determining all nodes contributing to the prediction, and (iii) perturbing both nodes and edges does not reliably help us identify the graph's components carrying out the temporal aggregation in TGNNs.
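For reference, the sketch below shows the kind of black-box node-perturbation procedure whose limits are studied here: each node is scored by the prediction drop when its features are masked. The `model(node_feats, edges)` interface is a hypothetical stand-in for a TGNN's forward pass.

```python
import numpy as np

def node_perturbation_scores(model, node_feats, edges, target_class):
    """Generic node-perturbation explanation loop (hypothetical interface).

    Assumes `model(node_feats, edges)` returns class probabilities; each node
    is scored by how much the target-class probability drops when its
    features are zeroed out.
    """
    base = model(node_feats, edges)[target_class]
    scores = np.zeros(node_feats.shape[0])
    for v in range(node_feats.shape[0]):
        masked = node_feats.copy()
        masked[v] = 0.0                      # remove node v's information
        scores[v] = base - model(masked, edges)[target_class]
    return scores
```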
In the past few years, many explanation methods based on perturbations of the input data have been introduced to improve our understanding of the decisions made by black-box models. The goal of this work is to introduce a novel perturbation scheme so that more faithful and robust explanations can be obtained. Our study focuses on the impact of the perturbation directions on the data topology. We show that perturbations along directions orthogonal to the input manifold better preserve the data topology, both in a worst-case analysis based on the discrete Gromov-Hausdorff distance and in an average-case analysis via persistent homology. From these results, we introduce the EMAP algorithm, which implements the orthogonal perturbation scheme. Our experiments show that EMAP not only improves the performance of explainers but also helps them overcome recent attacks against perturbation-based methods.
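A minimal sketch of an orthogonal perturbation scheme in this spirit (not the actual EMAP algorithm): the local tangent space of the data manifold is estimated with PCA on nearest neighbors, and a random perturbation is projected onto its orthogonal complement. The neighborhood size and intrinsic dimension are assumed hyperparameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA

def off_manifold_perturbation(x, data, n_neighbors=20, intrinsic_dim=5,
                              scale=0.1, rng=None):
    """Perturb x along a direction orthogonal to the estimated data manifold."""
    rng = rng or np.random.default_rng()
    nbrs = NearestNeighbors(n_neighbors=n_neighbors).fit(data)
    _, idx = nbrs.kneighbors(x.reshape(1, -1))
    local = data[idx[0]]
    # Rows of `basis` span the estimated local tangent space at x.
    basis = PCA(n_components=intrinsic_dim).fit(local).components_
    noise = rng.normal(size=x.shape[0])
    tangent_part = basis.T @ (basis @ noise)   # projection onto tangent space
    direction = noise - tangent_part           # orthogonal complement
    return x + scale * direction / np.linalg.norm(direction)
```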
Despite recent research on understanding deep neural networks (DNNs), many questions remain about how a DNN produces its predictions. In particular, given similar predictions on different input samples, are they produced by the same underlying mechanisms? In this work, we propose NeuCEPT, a method that locally discovers critical neurons that play a major role in the model's predictions and identifies the model's mechanisms in generating those predictions. We first formulate the critical-neuron identification problem as maximizing a sequence of mutual-information objectives and provide a theoretical framework to efficiently solve for the critical neurons while keeping the precision under control. NeuCEPT then learns the mechanisms of different models in an unsupervised manner. Our experimental results show that the neurons identified by NeuCEPT not only have a strong influence on the model's predictions but also carry meaningful information about the model's mechanisms.
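As a simplified proxy for ranking neurons by an information-theoretic criterion, the sketch below scores each neuron by the estimated mutual information between its activation and the model's predicted class; NeuCEPT's actual formulation is a sequence of MI-maximization problems with precision control.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def top_critical_neurons(activations, predictions, k=10):
    """Rank neurons by estimated mutual information with the predicted class.

    `activations` has shape (n_samples, n_neurons); `predictions` holds the
    model's predicted class per sample. This is a proxy, not NeuCEPT itself.
    """
    mi = mutual_info_classif(activations, predictions)
    return np.argsort(mi)[-k:][::-1], mi
```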
Temporal graph neural networks (TGNNs) are widely used for modeling graph-related tasks thanks to their ability to capture topological dependencies in graphs and non-linear temporal dynamics. Explanations of TGNNs are essential for transparent and trustworthy models. However, the complex topological structures and temporal dependencies make explaining TGNN models very challenging. In this paper, we propose a novel explainer framework for TGNN models. Given a time series on a graph to be explained, the framework identifies a dominant explanation in the form of a probabilistic graphical model over a time period. A case study in the transportation domain demonstrates that the proposed method can discover dynamic dependency structures in a road network over a period of time.
In this paper, we show that the process of continually learning new tasks and memorizing previous tasks introduces unknown privacy risks and challenges in bounding the privacy loss. Based on this, we introduce a formal definition of Lifelong DP, in which any data tuple in the training set of any task is protected under a consistently bounded DP guarantee, given a growing stream of tasks. Consistently bounded DP means having only one fixed value of the DP privacy budget, regardless of the number of tasks. To preserve Lifelong DP, we propose a scalable and heterogeneous algorithm, called L2DP-ML, with streaming batch training, to efficiently train and continue releasing new versions of an L2M model, given the data sizes and the training order of tasks, without affecting the DP protection of the private training sets. An end-to-end theoretical analysis and thorough evaluations show that our mechanism is significantly better than baseline approaches in preserving Lifelong DP. The implementation of L2DP-ML is available at: https://github.com/haiphannjit/privatedeeplearning.
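For context, a generic DP-SGD-style update is sketched below as the standard building block for DP-preserving training within a single task; it is not the L2DP-ML algorithm, and the clipping norm and noise multiplier are assumed parameters.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, sigma=1.0,
                rng=None):
    """One DP-SGD-style update: clip per-example gradients, add Gaussian noise.

    `per_example_grads` has shape (n_examples, n_params). Not L2DP-ML; just
    the common per-task DP training primitive.
    """
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)
    noisy_mean = (clipped.sum(axis=0)
                  + rng.normal(scale=sigma * clip, size=params.shape)
                  ) / per_example_grads.shape[0]
    return params - lr * noisy_mean
```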
Bilevel optimization has been applied to a wide variety of machine learning models, and numerous stochastic bilevel optimization algorithms have been developed in recent years. However, most of them restrict their focus to the single-machine setting, so that they cannot handle distributed data. To address this issue, under the setting where all participants compose a network and perform peer-to-peer communication within this network, we develop two novel distributed stochastic bilevel optimization algorithms based on the gradient-tracking communication mechanism and two different gradient estimators. Moreover, we show that they achieve convergence rates of $O(\frac{1}{\epsilon^{2}(1-\lambda)^{2}})$ and $O(\frac{1}{\epsilon^{3/2}(1-\lambda)^{2}})$, respectively, for obtaining an $\epsilon$-accurate solution, where $1-\lambda$ denotes the spectral gap of the communication network. To our knowledge, this is the first work to achieve these theoretical results. Finally, we apply our algorithms to practical machine learning models, and the experimental results confirm the efficacy of our algorithms.
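The gradient-tracking communication mechanism that these algorithms build on can be illustrated on a toy decentralized single-level quadratic, as in the sketch below. The ring topology, step size, and local objectives are assumptions for illustration; the paper's algorithms extend this primitive to stochastic bilevel problems.

```python
import numpy as np

# Minimal gradient-tracking sketch on a decentralized quadratic problem:
# each agent i minimizes f_i(x) = 0.5 * ||x - b_i||^2; the global optimum
# of (1/n) * sum_i f_i is mean(b_i).
n_agents, dim, alpha, steps = 4, 3, 0.1, 200
rng = np.random.default_rng(0)
b = rng.normal(size=(n_agents, dim))
grad = lambda i, x: x - b[i]

# Doubly stochastic mixing matrix W for a ring topology.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))
y = np.stack([grad(i, x[i]) for i in range(n_agents)])   # gradient trackers

for _ in range(steps):
    x_new = W @ x - alpha * y                            # consensus + descent
    y = W @ y + np.stack([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])     # track average gradient
    x = x_new

print(np.allclose(x.mean(axis=0), b.mean(axis=0), atol=1e-3))  # converged
```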
In recent years, knowledge distillation (KD) has been regarded as a key solution for model compression and acceleration. In KD, a small student model is typically trained from a large teacher model by minimizing the divergence between their probabilistic outputs. However, as shown in our experiments, existing KD methods may not transfer the teacher's critical explanatory knowledge to the student, i.e., the explanations of the predictions made by the two models are not consistent. In this paper, we propose a novel explainable knowledge distillation model, called XDistillation, through which the explanation information is transferred from the teacher model to the student model. The XDistillation model leverages the idea of convolutional autoencoders to approximate the teacher's explanations. Our experiments show that models trained by XDistillation outperform those trained by conventional KD methods not only in terms of predictive accuracy but also in terms of faithfulness to the teacher model.
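A minimal sketch of a KD objective augmented with an explanation-matching term is given below. It assumes precomputed saliency maps for the teacher and student, whereas XDistillation itself approximates the teacher's explanations with a convolutional autoencoder; the weights T, alpha, and beta are illustrative.

```python
import torch.nn.functional as F

def explainable_kd_loss(student_logits, teacher_logits, labels,
                        student_expl, teacher_expl, T=4.0, alpha=0.5, beta=0.1):
    """Cross-entropy + temperature-scaled KD + explanation-matching term."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    expl = F.mse_loss(student_expl, teacher_expl)   # align explanations
    return (1 - alpha) * ce + alpha * kd + beta * expl
```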
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representations from features and edges among nodes in graph data. To prevent privacy leakages in GNNs, we propose a novel heterogeneous randomized response (HeteroRR) mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees without an undue cost of data and model utility in training GNNs. Our idea is to balance the importance and sensitivity of nodes' features and edges in redistributing the privacy budgets, since some features and edges are more sensitive or important to the model utility than others. As a result, we derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges compared with existing approaches, thus enabling us to maintain high data utility for training GNNs. An extensive theoretical and empirical analysis using benchmark datasets shows that HeteroRR significantly outperforms various baselines in terms of model utility under rigorous privacy protection for both nodes' features and edges. That enables us to defend against PIAs in DP-preserving GNNs effectively.
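As a simplified illustration of redistributing a fixed privacy budget according to importance, the sketch below allocates per-feature budgets proportional to importance weights and applies standard randomized response to binary features; HeteroRR's actual randomization probabilities and error bounds are derived differently.

```python
import numpy as np

def hetero_rr(bits, importances, total_eps, rng=None):
    """Heterogeneous randomized response over binary features (sketch).

    Per-feature budgets eps_i are set proportional to importance and sum to
    `total_eps` (sequential composition); each bit is kept with probability
    e^{eps_i} / (e^{eps_i} + 1) and flipped otherwise.
    """
    rng = rng or np.random.default_rng()
    b = np.asarray(bits)
    w = np.asarray(importances, dtype=float)
    eps = total_eps * w / w.sum()                  # budget redistribution
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)     # per-feature RR probability
    flips = rng.random(len(b)) >= p_keep
    return np.where(flips, 1 - b, b)
```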
Over the past two years, from 2020 to 2021, COVID-19 has broken disease prevention measures in many countries, including Vietnam, and has negatively affected every aspect of human life and socio-economic communities. In addition, misleading information and fake news about the pandemic circulating in the community are also a serious problem. Therefore, we present the first Vietnamese community-based question answering dataset for developing COVID-19 question answering systems, called UIT-ViCoV19QA. The dataset comprises 4,500 question-answer pairs collected from trusted medical sources, with at least one answer and at most four unique paraphrased answers per question. Along with the dataset, we establish various deep learning models as baselines to assess the quality of the dataset and provide benchmark results using commonly used metrics such as BLEU, METEOR, and ROUGE-L for further research. We also illustrate the positive effects of having multiple paraphrased answers on these models, especially on Transformers, the dominant architecture in this research field.